feat: extend #1812
Conversation
**Walkthrough**

Increased the API route maxDuration from 180 to 300 seconds.

**Sequence Diagram(s)**

Omitted (the change is a simple config update with no control-flow modifications).

**Estimated code review effort**

🎯 1 (Trivial) | ⏱️ ~2 minutes
Actionable comments posted: 1
🧹 Nitpick comments (2)
pages/api/links/download/bulk.ts (2)
`11-14`: **Consider making the Lambda call asynchronous to avoid long-held HTTP connections.**

Keeping the request open for up to 15 minutes hurts UX, ties up concurrency, and is brittle across proxies. Prefer:
- Fire-and-forget Lambda (InvocationType.Event) or a queue/Step Function
- Return 202 + operationId immediately
- Poll a status endpoint or use callbacks/webhooks to deliver the download URL when ready
This reduces timeouts and improves resilience for large bulk downloads.
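The "fire-and-forget plus 202 + operationId" flow above can be sketched as follows. This is a minimal illustration, not the repo's implementation: the job store, function names, and `JobStatus` shape are all hypothetical, and the worker invocation is injected as a callback so no AWS SDK call is assumed (a real version would publish to a queue or invoke Lambda with `InvocationType: "Event"` and persist jobs in a database or cache, not in process memory).

```typescript
import { randomUUID } from "crypto";

// Hypothetical in-memory job store; production code would use a durable store.
type JobStatus = "pending" | "done" | "failed";
const jobs = new Map<string, { status: JobStatus; downloadUrl?: string }>();

// Kick off the long-running work without awaiting it and hand the client an
// operationId to poll. `invokeWorker` stands in for a fire-and-forget Lambda
// invoke or a queue publish.
export function startBulkDownload(
  invokeWorker: (operationId: string) => void,
): { status: 202; operationId: string } {
  const operationId = randomUUID();
  jobs.set(operationId, { status: "pending" });
  invokeWorker(operationId); // not awaited: the HTTP response returns immediately
  return { status: 202, operationId };
}

// Status endpoint the client polls until the zip is ready.
export function getStatus(operationId: string): { status: JobStatus; downloadUrl?: string } {
  return jobs.get(operationId) ?? { status: "failed" };
}

// Worker callback/webhook marks the job complete with the download URL.
export function completeJob(operationId: string, downloadUrl: string): void {
  jobs.set(operationId, { status: "done", downloadUrl });
}
```

The key property is that the HTTP handler's work is bounded by the time to enqueue, not by the time to build the zip.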
`11-14`: **Document the rationale for the 900s bump.**

Future readers will benefit from knowing this aligns with Lambda's 15-min sync ceiling and large dataroom zips.
Apply this small comment for context:
```diff
 export const config = {
+  // Allow up to 15 minutes for bulk downloads to complete (aligns with AWS Lambda's 15-min sync limit).
   maxDuration: 900,
   memory: 2048,
 };
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
💡 Knowledge Base configuration:
- MCP integration is disabled by default for public repositories
- Jira integration is disabled by default for public repositories
- Linear integration is disabled by default for public repositories
You can enable these settings in your CodeRabbit configuration.
📒 Files selected for processing (1)
pages/api/links/download/bulk.ts (1 hunks)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: Analyze (javascript-typescript)
```diff
 export const config = {
-  maxDuration: 180,
+  maxDuration: 900,
   memory: 2048,
 };
```
💡 Verification agent
🧩 Analysis chain
Verify 900s maxDuration is actually honored for pages/api routes and your hosting plan.
Bumping to 900s is reasonable for large zips, but many platforms cap synchronous Serverless HTTP at lower limits (often 60–300s), or only allow 900s for background jobs. If unsupported, clients will still see timeouts/504s long before Lambda returns.
Run these to cross-check consistency and hosting constraints:
And confirm provider support/limits:
🏁 Script executed:

```shell
#!/bin/bash
# 1) Where else do we set maxDuration? Ensure consistency across API routes.
rg -n -A 2 -B 2 'maxDuration\s*:' --no-ignore

# 2) Check if any routes are set to edge runtime (AWS SDK won't work there).
rg -n -A 1 -B 1 'export const runtime\s*=' --no-ignore
```

Length of output: 4625
🌐 Web query:
For Next.js pages/api deployed on Vercel, does `export const config = { maxDuration }` apply, and is 900 seconds supported for synchronous Serverless Functions? If not, what are the supported max durations and the correct way to run up to 15-minute jobs?
💡 Result:
The export const config = { maxDuration } directive does apply to Next.js API routes (pages/api) deployed on Vercel, but the maximum value supported for synchronous Serverless Functions varies by plan and whether “fluid compute” is enabled. Running synchronous functions for 900 seconds (15 minutes) is generally only possible on Enterprise plans without fluid compute enabled, and is not supported for synchronous functions on most plans or configurations[1][8].
Key duration limits for Next.js API routes on Vercel:
| Plan | Fluid Compute | Max Duration (seconds) |
|---|---|---|
| Hobby | Enabled | 300 |
| Pro | Enabled | 800 |
| Enterprise | Enabled | 800 |
| Hobby | Disabled | 60 |
| Pro | Disabled | 300 |
| Enterprise | Disabled | 900 |
- Fluid compute is enabled by default for most projects[1][8].
- With fluid compute enabled, Enterprise and Pro both max out at 800 seconds (13 minutes 20 seconds).
- Only Enterprise plans with fluid compute disabled can reach 900 seconds (15 minutes)[8][1].
How to set function duration:
- Set `maxDuration` in the API route file as part of the exported config: `export const config = { maxDuration: 800 };`
- Alternatively, configure `maxDuration` per-route or with glob patterns in your `vercel.json` [1].
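As a sketch of the `vercel.json` alternative mentioned above, per Vercel's function-configuration docs. The route path and values here are illustrative for this PR's bulk-download route, not a verified config from the repo:

```json
{
  "functions": {
    "pages/api/links/download/bulk.ts": {
      "maxDuration": 800,
      "memory": 2048
    }
  }
}
```

Glob patterns such as `pages/api/**` can apply the same limit to a group of routes.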
Running synchronous jobs up to 15 minutes:
- Not supported for synchronous serverless functions unless you are on Enterprise, with fluid compute disabled. For all other scenarios, you are limited to 800 seconds, or less depending on plan/configuration[1][8].
- For background tasks longer than your duration limit, use asynchronous techniques: queue the job to a persistent background worker service (for example, using queues like AWS SQS, or third-party background services), and return an initial response to the client rather than blocking for completion[9].
- If you need to perform additional work after the response, investigate the use of `waitUntil` or Next.js's experimental `after` API in conjunction with fluid compute. This allows background processing, but is only suitable for non-blocking, fire-and-forget tasks and may still be duration-constrained [9].
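The "return an initial response, then poll" technique from the list above can be sketched from the client side. `pollUntilDone` and the `Status` shape are hypothetical; the status fetcher is injected as a parameter so no real endpoint or network call is assumed:

```typescript
// Minimal client-side polling helper for a 202 + operationId API.
type Status = { status: "pending" | "done" | "failed"; downloadUrl?: string };

export async function pollUntilDone(
  fetchStatus: () => Promise<Status>,
  { intervalMs = 2000, maxAttempts = 30 } = {},
): Promise<Status> {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    const s = await fetchStatus();
    if (s.status !== "pending") return s; // done or failed: stop polling
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
  throw new Error("Timed out waiting for bulk download to finish");
}
```

In practice `fetchStatus` would wrap a `fetch` to the status endpoint, and the interval/attempt budget would be tuned to the expected zip build time.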
Summary of best practices for long-running jobs:
- For jobs up to 13:20 minutes: use `maxDuration: 800` and a Pro/Enterprise plan with fluid compute [1][8].
- For jobs up to 15 minutes: only use synchronous API routes on Enterprise plans with fluid compute explicitly disabled [8].
- For any jobs requiring reliable execution over 800 seconds, or if not on Enterprise, run the task asynchronously outside the Vercel serverless execution environment.
In conclusion:
900 seconds (15 minutes) is only supported for synchronous serverless functions on Enterprise plans with fluid compute disabled[8][1]. For most use cases on Vercel, the max achievable duration for synchronous serverless (API) routes is 800 seconds on Pro/Enterprise with fluid compute enabled. To run jobs up to 15 minutes or longer, delegate work to a background worker/service so that your API route can respond within platform-mandated limits.
References:
[1]: Vercel Docs – Maximum Duration Configuration
[8]: Vercel Docs – Vercel Functions Limits
[9]: Vercel Docs – Avoiding Timeout, Fluid Compute
[10]: Vercel Docs – Configuring Functions
Citations:
- 1: https://vercel.com/docs/functions/configuring-functions/duration
- 2: https://www.youtube.com/watch?v=YuzKlQxMVUI
- 3: https://community.vercel.com/t/build-warning-in-deployment-after-adding-vercel-json-file/507
- 4: maxDuration and server actions vercel/next.js#58855
- 5: https://vercel.com/changelog/serverless-functions-can-now-run-up-to-5-minutes
- 6: The `maxDuration` setting in the api route config, as per the documentation, isn't being honored on dev or on Vercel. We're on the pro plan. vercel/next.js#66462
- 7: https://vercel.com/docs/limits
- 8: https://vercel.com/docs/functions/limitations
- 9: https://vercel.com/guides/what-can-i-do-about-vercel-serverless-functions-timing-out
- 10: https://vercel.com/docs/functions/configuring-functions
Action: don’t assume 900s is available — confirm your hosting plan supports 15 minutes or reduce/offload the work.
Short summary: I ran the grep and a docs check. pages/api/links/download/bulk.ts exports maxDuration: 900; no pages/api routes use edge runtime (edge runtimes are in app/*). Vercel only allows 900s for synchronous functions on Enterprise plans with fluid compute explicitly disabled — most setups (Pro/Enterprise with fluid compute enabled) max out at ~800s, and lower limits apply on Hobby. If you’re not on Enterprise+fluid-compute-disabled, clients will hit platform timeouts before Lambda returns.
Files needing attention:
- pages/api/links/download/bulk.ts (lines ~11–14) — currently maxDuration: 900. Action: either confirm your hosting plan supports 900s, or change to 800 (safer for Vercel Pro/Enterprise) or move the long-running work to a background job/queue and return immediately.
- lib/trigger/export-visits.ts (line ~58) — also uses maxDuration: 900; verify how this trigger runs (background/worker vs. synchronous API). If it’s a true background worker, 900 may be acceptable.
- Note: other API routes use 180/120/60 — keep them as-is unless you need longer.
Suggested quick update (if you can’t guarantee 900s):
```js
export const config = {
  maxDuration: 800, // use 800s unless you confirm Enterprise + fluid compute disabled
  memory: 2048,
};
```
If you plan to keep 900s, confirm with your infra/hosting owner that the project is on an Enterprise plan with fluid compute disabled.
🤖 Prompt for AI Agents
In pages/api/links/download/bulk.ts around lines 11–14 the exported config sets
maxDuration: 900 which may exceed the platform timeout for non-Enterprise (or
Enterprise with fluid compute enabled) Vercel plans; either confirm with your
infra/hosting owner that this project runs on Enterprise with fluid compute
disabled before keeping 900, or change the value to a safer 800 seconds and keep
memory 2048, or refactor the long-running work into a background job/queue and
return immediately; also check lib/trigger/export-visits.ts (around line 58) for
another maxDuration: 900 and apply the same decision (reduce to 800 or move to
background) as appropriate.